A Globally Convergent Stabilized SQP Method
Authors
Abstract
Sequential quadratic programming (SQP) methods are a popular class of methods for nonlinearly constrained optimization. They are particularly effective for solving a sequence of related problems, such as those arising in mixed-integer nonlinear programming and the optimization of functions subject to differential equation constraints. Recently, there has been considerable interest in the formulation of stabilized SQP methods, which are specifically designed to handle degenerate optimization problems. Existing stabilized SQP methods are essentially local in the sense that both the formulation and analysis focus on the properties of the methods in a neighborhood of a solution. A new SQP method is proposed that has favorable global convergence properties yet, under suitable assumptions, is equivalent to a variant of the conventional stabilized SQP method in the neighborhood of a solution. The method combines a primal-dual generalized augmented Lagrangian function with a flexible line search to obtain a sequence of improving estimates of the solution. The method incorporates a convexification algorithm that allows the use of exact second derivatives to define a convex quadratic programming (QP) subproblem without requiring that the Hessian of the Lagrangian be positive definite in the neighborhood of a solution. This gives the potential for fast convergence in the neighborhood of a solution. Additional benefits of the method are that each QP subproblem is regularized and the QP subproblem always has a known feasible point. Numerical experiments are presented for a subset of the problems from the CUTEr test collection.
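To make the abstract's description concrete, the following is a minimal sketch (not the paper's algorithm) of the stabilized SQP idea on a toy equality-constrained problem: minimize x1² + x2² subject to x1 + x2 = 1. Each iteration solves the KKT system of a QP subproblem that is regularized in the dual variables by a -μI block, so the system remains nonsingular even when the constraint Jacobian loses rank; all names and the fixed stabilization parameter μ are illustrative assumptions.

```python
import numpy as np

def stabilized_sqp_toy(x, y, mu=0.1, iters=30):
    """One possible stabilized-SQP sketch for:
    minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0."""
    for _ in range(iters):
        g = 2.0 * x                        # gradient of the objective
        H = 2.0 * np.eye(2)                # Hessian of the Lagrangian
        c = x[0] + x[1] - 1.0              # constraint residual
        J = np.array([[1.0, 1.0]])         # constraint Jacobian
        # Stabilized QP optimality system (dual regularization -mu*I):
        #   [ H    J^T  ] [d]   [ -g           ]
        #   [ J   -mu*I ] [y] = [ -c - mu*y_k  ]
        K = np.block([[H, J.T], [J, -mu * np.eye(1)]])
        rhs = np.concatenate([-g, [-c - mu * y]])
        sol = np.linalg.solve(K, rhs)
        x = x + sol[:2]                    # primal step
        y = sol[2]                         # new multiplier estimate
    return x, y

x_opt, y_opt = stabilized_sqp_toy(np.zeros(2), 0.0)
# x_opt approaches (0.5, 0.5); y_opt approaches -1.
```

With fixed μ the multiplier error contracts by roughly μ/(1+μ) per iteration on this problem; practical methods drive the stabilization parameter to zero to recover fast local convergence.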
Similar resources
A Globally Convergent Stabilized SQP Method: Superlinear Convergence
Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (SIAM J. Optim., 23(4):198...
A globally and superlinearly convergent trust-region SQP method without a penalty function for nonlinearly constrained optimization
In this paper, we propose a new trust-region SQP method, which uses no penalty function, for solving nonlinearly constrained optimization problems. Our method alternates between two algorithms: a feasibility restoration algorithm and an objective function minimization algorithm. The global and superlinear convergence property of the proposed method is s...
Globalizing Stabilized SQP by Smooth Primal-Dual Exact Penalty Function
An iteration of the stabilized sequential quadratic programming method (sSQP) consists of solving a certain quadratic program in the primal-dual space, regularized in the dual variables. The advantage over classical sequential quadratic programming (SQP) is that no constraint qualifications are required for fast local convergence (i.e., the problem can be degenerate). In particul...
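As a sketch of the subproblem structure described above (notation assumed here, not taken from the paper): at an iterate $(x_k, y_k)$ with stabilization parameter $\sigma_k > 0$, the sSQP subproblem for equality constraints $c(x) = 0$ can be written as

```latex
\min_{d,\,y}\;\; \nabla f(x_k)^{T} d
  \;+\; \tfrac{1}{2}\, d^{T} H_k\, d
  \;+\; \tfrac{\sigma_k}{2}\,\lVert y \rVert^{2}
\quad\text{subject to}\quad
c(x_k) + J(x_k)\, d \;-\; \sigma_k\,(y - y_k) \;=\; 0,
```

where $H_k$ approximates the Hessian of the Lagrangian and $J(x_k)$ is the constraint Jacobian. The $\tfrac{\sigma_k}{2}\lVert y\rVert^{2}$ term and the relaxed constraint constitute the dual regularization: the subproblem remains well posed even when $J(x_k)$ is rank deficient, which is why no constraint qualification is needed.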
A new double trust regions SQP method without a penalty function or a filter
A new trust-region SQP method for equality constrained optimization is considered. This method avoids using a penalty function or a filter, and yet can be globally convergent to first-order critical points under some reasonable assumptions. Each SQP step is composed of a normal step and a tangential step for which different trust regions are applied in the spirit of Gould and Toint [Math. Progr...
A Superlinearly feasible SQP algorithm for Constrained Optimization
This paper is concerned with a superlinearly feasible SQP algorithm for general constrained optimization. Compared with existing SQP methods, only equality constrained quadratic programming subproblems need to be solved at each iteration, which further reduces the computational effort of the proposed algorithm. Furthermore, under some mild assumptions, the algo...
Journal: SIAM Journal on Optimization
Volume 23, Issue –
Pages: –
Publication date: 2013